Controllable Contextualized Image Captioning: Directing the Visual Narrative through User-Defined Highlights

Mao, Shunqi, Zhang, Chaoyi, Su, Hang, Song, Hwanjun, Shalyminov, Igor, Cai, Weidong

arXiv.org Artificial Intelligence

Contextualized Image Captioning (CIC) evolves traditional image captioning into a more complex domain that demands multimodal reasoning: it aims to generate image captions conditioned on specific contextual information. This paper introduces the novel task of Controllable Contextualized Image Captioning (Ctrl-CIC). Unlike CIC, which relies solely on broad context, Ctrl-CIC accentuates a user-defined highlight, compelling the model to tailor captions that resonate with the highlighted aspects of the context. We present two approaches to generate focused captions: the Prompting-based Controller (P-Ctrl) and the Recalibration-based Controller (R-Ctrl). P-Ctrl conditions generation on the highlight by prepending captions with highlight-driven prefixes, whereas R-Ctrl tunes the model to selectively recalibrate the encoder embeddings of highlighted tokens. Additionally, we design a GPT-4V-empowered evaluator to assess the quality of the controlled captions alongside standard assessment methods. Extensive experimental results demonstrate the efficient and effective controllability of our method, charting a new direction toward user-adaptive image captioning. Code is available at https://github.com/ShunqiM/Ctrl-CIC.
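The abstract describes P-Ctrl as prepending highlight-driven prefixes to condition generation. A minimal sketch of that prompt-construction idea is below; the function name and prefix template are illustrative assumptions, not the paper's actual format.

```python
# Illustrative sketch of the P-Ctrl prefix idea described in the abstract.
# The template and function name are hypothetical; the paper's exact prompt
# format is not given here.

def build_pctrl_input(context: str, highlight: str) -> str:
    """Prepend a highlight-driven prefix to the contextual input so the
    captioning model is conditioned on the user-defined highlight."""
    prefix = f"Highlight: {highlight}\n"
    return prefix + f"Context: {context}\nCaption:"

prompt = build_pctrl_input(
    context="A web page about mountain hiking gear.",
    highlight="waterproof boots",
)
print(prompt)
```

The resulting string would be fed to the caption generator, steering it toward the highlighted span; R-Ctrl, by contrast, operates inside the model by reweighting encoder embeddings rather than editing the input text.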


Speciesist bias in AI -- How AI applications perpetuate discrimination and unfair outcomes against animals

Hagendorff, Thilo, Bossert, Leonie, Fai, Tse Yip, Singer, Peter

arXiv.org Artificial Intelligence

Massive efforts are made to reduce biases in both data and algorithms in order to render AI applications fair. These efforts are propelled by various high-profile cases where biased algorithmic decision-making caused harm to women, people of color, minorities, etc. However, the AI fairness field still succumbs to a blind spot, namely its insensitivity to discrimination against animals. This paper is the first to describe the 'speciesist bias' and investigate it in several different AI systems. Speciesist biases are learned and solidified by AI applications when they are trained on datasets in which speciesist patterns prevail. These patterns can be found in image recognition systems, large language models, and recommender systems. Therefore, AI technologies currently play a significant role in perpetuating and normalizing violence against animals. This can only be changed when AI fairness frameworks widen their scope and include mitigation measures for speciesist biases. This paper addresses the AI community in this regard and stresses the influence AI systems can have on either increasing or reducing the violence that is inflicted on animals, and especially on farmed animals.


More charges likely in case of abduction, bodies found

U.S. News

This photo provided by the Ashland County Sheriff's Office shows Shawn M. Grate. Grate was arrested Tuesday, Sept. 13, 2016, in Ashland, Ohio, in connection with the investigation of a rescued abductee and the discovery of the remains of two people in the home where he was arrested.